
    Learning And Optimization Of The Kernel Functions From Insufficiently Labeled Data

    Among machine learning techniques, kernel methods are increasingly popular due to their efficiency, accuracy, and ability to handle high-dimensional data. The fundamental problem with these techniques is the selection of the kernel function. Learning the kernel, i.e., the procedure by which a kernel function is selected for a particular dataset, is therefore highly important. In this thesis, two approaches to learning the kernel function are proposed: transferred learning of the kernel and an unsupervised approach to learning the kernel. The first approach uses knowledge transferred from unlabeled data to cope with situations where training examples are scarce. Unlabeled data is used in conjunction with labeled data to construct an optimized kernel using Fisher discriminant analysis and maximum mean discrepancy. Classification accuracy, i.e., the proportion of correctly predicted test examples, is compared between the base kernels and the optimized kernel on two datasets involving satellite images and synthetic data, where the proposed approach produces better results. The second approach is an unsupervised method to learn a linear combination of kernel functions
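
    The abstract does not spell out the optimization itself, but the maximum mean discrepancy (MMD) it relies on is easy to illustrate. Below is a minimal NumPy sketch, with hypothetical names and toy data not taken from the thesis, that scores a family of Gaussian base kernels by the MMD between a small labeled sample and a larger unlabeled one; the thesis combines such a criterion with Fisher discriminant analysis, which is omitted here.

        import numpy as np

        def rbf_kernel(X, Y, gamma=1.0):
            """Gaussian (RBF) base kernel matrix between the rows of X and Y."""
            sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
            return np.exp(-gamma * sq)

        def mmd2(X_a, X_b, kernel):
            """Biased estimate of the squared maximum mean discrepancy."""
            return (kernel(X_a, X_a).mean() + kernel(X_b, X_b).mean()
                    - 2 * kernel(X_a, X_b).mean())

        # Toy stand-ins for a small labeled sample and a larger unlabeled one.
        rng = np.random.default_rng(0)
        X_lab = rng.normal(0.0, 1.0, (50, 5))
        X_unl = rng.normal(0.5, 1.0, (200, 5))

        # Score each base kernel by how similar the two samples look under it;
        # the thesis optimizes such a criterion jointly with the Fisher
        # discriminant, which this toy sketch omits.
        for g in (0.01, 0.1, 1.0, 10.0):
            d = mmd2(X_lab, X_unl, lambda A, B, g=g: rbf_kernel(A, B, g))
            print(f"gamma={g:>5}: MMD^2 = {d:.4f}")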

    Adversarial Variational Embedding for Robust Semi-supervised Learning

    Semi-supervised learning is sought for leveraging unlabelled data when labelled data is difficult or expensive to acquire. Deep generative models (e.g., the Variational Autoencoder (VAE)) and semi-supervised Generative Adversarial Networks (GANs) have recently shown promising performance in semi-supervised classification owing to their excellent discriminative representation ability. However, the latent code learned by the traditional VAE is not exclusive (repeatable) for a specific input sample, which prevents it from achieving excellent classification performance. In particular, the learned latent representation depends on a non-exclusive component that is stochastically sampled from the prior distribution. Moreover, semi-supervised GAN models generate data from a pre-defined distribution (e.g., Gaussian noise) that is independent of the input data distribution; this may obstruct convergence and makes it difficult to control the distribution of the generated data. To address these issues, we propose a novel Adversarial Variational Embedding (AVAE) framework for robust and effective semi-supervised learning, leveraging both the advantage of the GAN as a high-quality generative model and that of the VAE as a posterior distribution learner. The proposed approach first produces an exclusive latent code with a model we call VAE++ and, meanwhile, provides a meaningful prior distribution for the generator of the GAN. The proposed approach is evaluated on four different real-world applications, and we show that our method outperforms state-of-the-art models, confirming that the combination of VAE++ and GAN provides significant improvements in semi-supervised classification.
    Comment: 9 pages, Accepted by Research Track in KDD 201
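
    As a rough illustration of the idea described above, not the authors' VAE++ architecture, whose details are not in this abstract, the PyTorch sketch below pairs a deterministic encoder, so each input maps to one "exclusive" latent code, with a GAN generator fed those codes instead of fixed Gaussian noise. All layer sizes and names are assumptions.

        import torch
        import torch.nn as nn

        class Encoder(nn.Module):
            """Deterministic encoder: the same input always yields the same
            latent code, i.e. the code is 'exclusive' to that sample."""
            def __init__(self, x_dim=784, z_dim=32):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                         nn.Linear(256, z_dim))
            def forward(self, x):
                return self.net(x)

        class Generator(nn.Module):
            """GAN generator fed encoder codes rather than fixed Gaussian
            noise, so its input distribution is tied to the data."""
            def __init__(self, z_dim=32, x_dim=784):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                         nn.Linear(256, x_dim), nn.Tanh())
            def forward(self, z):
                return self.net(z)

        enc, gen = Encoder(), Generator()
        x = torch.randn(16, 784)   # a toy batch of flattened inputs
        z = enc(x)                 # one exclusive code per sample
        x_fake = gen(z)            # generated from a data-driven prior
        print(z.shape, x_fake.shape)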

    A survey of the state of the art in learning the kernels

    In recent years the machine learning community has witnessed tremendous growth in the development of kernel-based learning algorithms. However, the performance of this class of algorithms depends greatly on the choice of the kernel function. The kernel function implicitly represents the inner product between a pair of points of a dataset in a higher-dimensional space. This inner product amounts to the similarity between points and provides a solid foundation for nonlinear analysis in kernel-based learning algorithms. The most important challenge in kernel-based learning is the selection of an appropriate kernel for a given dataset. To remedy this problem, algorithms that learn the kernel have recently been proposed. These methods formulate a learning algorithm that finds an optimal kernel for a given dataset. In this paper, we present an overview of these algorithms and provide a comparison of various approaches to finding an optimal kernel. Furthermore, a list of pivotal issues that lead to the efficient design of such algorithms is presented.
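
    One concrete instance of "learning the kernel" covered by such surveys is choosing the weights of a linear combination of base kernels, K = sum_i mu_i K_i. The sketch below is a toy heuristic, not any specific algorithm from the paper: it weights Gaussian base kernels by their kernel-target alignment with the labels.

        import numpy as np

        def rbf(X, gamma):
            sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
            return np.exp(-gamma * sq)

        def alignment(K, y):
            """Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F)."""
            Y = np.outer(y, y)
            return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

        rng = np.random.default_rng(1)
        X = rng.normal(size=(40, 3))
        y = np.sign(X[:, 0])                      # toy binary labels

        bases = [rbf(X, g) for g in (0.1, 1.0, 10.0)]
        # Weight each base kernel by its (clipped) alignment with the labels,
        # then normalize: a simple heuristic for the weights mu_i.
        mu = np.maximum([alignment(K, y) for K in bases], 0.0)
        mu = mu / (mu.sum() + 1e-12)
        K_combined = sum(m * K for m, K in zip(mu, bases))
        print("weights:", np.round(mu, 3))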

    A note on solving the fuzzy Sylvester matrix equation

    In this work, we present a theoretical analysis of the solution of the Fuzzy Sylvester Matrix Equation (FSME) of the form AX̃ + X̃B = C̃. Necessary and sufficient conditions for the existence of fuzzy solutions are proposed, and some operators for finding
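
    For reference, the crisp counterpart of this equation, AX + XB = C with ordinary real matrices, can be solved directly with SciPy; the fuzzy case studied in the paper replaces X and C with fuzzy matrices, which the sketch below (with made-up example matrices) does not handle.

        import numpy as np
        from scipy.linalg import solve_sylvester

        # Crisp instance of AX + XB = C with ordinary real matrices.
        A = np.array([[3.0, 1.0], [0.0, 2.0]])
        B = np.array([[1.0, 0.0], [1.0, 4.0]])
        C = np.array([[5.0, 2.0], [3.0, 6.0]])

        X = solve_sylvester(A, B, C)
        print(X)
        print(np.allclose(A @ X + X @ B, C))   # residual check: True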